149 research outputs found

    DeepSketchHair: Deep Sketch-based 3D Hair Modeling

    We present DeepSketchHair, a deep learning-based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of a hair contour and a few strokes indicating the hair growth direction within the hair region), and automatically generates a 3D hair model that matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks: S2ONet, which converts an input sketch to a dense 2D hair orientation field, and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views, enabled by a third deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and compare our method with prior art.
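The two-stage mapping the abstract describes (sketch → dense 2D orientation field → 3D vector field) can be illustrated with placeholder functions. Everything below is a hypothetical sketch: the `s2onet`/`o2vnet` stand-ins, the array shapes, and the volume depth are illustrative assumptions, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def s2onet(sketch):
    """Stand-in for S2ONet: map a sketch image (H, W) to a dense 2D
    orientation field (H, W, 2) of unit vectors. Here a dummy random
    field replaces the learned network."""
    h, w = sketch.shape
    field = rng.standard_normal((h, w, 2))
    return field / np.linalg.norm(field, axis=-1, keepdims=True)

def o2vnet(orient2d):
    """Stand-in for O2VNet: lift the 2D orientation field to a 3D
    vector field on a (D, H, W) volume. The depth D is illustrative."""
    h, w, _ = orient2d.shape
    d = 16
    vol = np.zeros((d, h, w, 3))
    vol[..., :2] = orient2d  # copy the 2D directions into every depth slice
    return vol

sketch = np.zeros((32, 32))     # toy user sketch
orient = s2onet(sketch)         # dense 2D orientation field
vector_field = o2vnet(orient)   # 3D hair growth field
```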

    DCL: Differential Contrastive Learning for Geometry-Aware Depth Synthesis

    We describe a method for unpaired realistic depth synthesis that learns diverse variations from real-world depth scans and ensures geometric consistency between the synthetic and synthesized depth. The synthesized realistic depth can then be used to train task-specific networks, facilitating label transfer from the synthetic domain. Unlike existing image synthesis pipelines, where geometry is mostly ignored, we treat the geometries carried by the depth scans as signals in their own right. We propose differential contrastive learning, which explicitly enforces the underlying geometric properties to be invariant with respect to the real variations being learned. The resulting depth synthesis method is task-agnostic, and we demonstrate its effectiveness through extensive evaluations on real-world geometric reasoning tasks. Networks trained with depth synthesized by our method consistently achieve better performance across a wide range of tasks than the state of the art, and can even surpass networks supervised with full real-world annotations when slightly fine-tuned, showing good transferability.
    Comment: Accepted by International Conference on Robotics and Automation (ICRA) 2022 and RA-L 202
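The abstract does not give the authors' exact differential formulation, but contrastive objectives of this family are typically built on an InfoNCE-style loss that pulls a geometrically consistent pair together and pushes unrelated samples apart. The sketch below is a generic illustration of that building block, not the paper's loss; all names and dimensions are assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss for one anchor.
    anchor, positive: (d,) unit vectors; negatives: (n, d) unit vectors.
    Smaller loss = anchor closer to its positive than to negatives."""
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return -np.log(pos / (pos + neg))

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(1)
a = unit(rng.standard_normal(8))               # anchor feature
p = unit(a + 0.05 * rng.standard_normal(8))    # geometrically consistent view
n = unit(rng.standard_normal((16, 8)))         # unrelated samples
loss = info_nce(a, p, n)
```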

    SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence

    In this paper, we study the problem of multi-view sketch correspondence: we take as input multiple freehand sketches of the same object from different views and predict as output the semantic correspondence among the sketches. This problem is challenging since the visual features of corresponding points can differ greatly across views. To this end, we take a deep learning approach and learn a novel local sketch descriptor from data. We contribute a training dataset by generating pixel-level correspondences for multi-view line drawings synthesized from 3D shapes. To handle the sparsity and ambiguity of sketches, we design a novel multi-branch neural network that integrates a patch-based representation and a multi-scale strategy to learn the pixel-level correspondence among multi-view sketches. We demonstrate the effectiveness of our approach with extensive experiments on hand-drawn sketches and multi-view line drawings rendered from multiple 3D shape datasets.
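The patch-based descriptor matching the abstract describes can be sketched end to end: extract a local descriptor around each keypoint, then match by nearest cosine similarity. The flattened-patch "descriptor" below is a toy stand-in for the paper's learned multi-branch network, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def patch_descriptors(image, keypoints, patch=8):
    """Toy local descriptor: crop a patch around each keypoint, flatten,
    and L2-normalize. A real system would run a learned network here."""
    half = patch // 2
    padded = np.pad(image, half, mode="constant")
    descs = []
    for (y, x) in keypoints:
        d = padded[y:y + patch, x:x + patch].ravel().astype(float)
        descs.append(d / (np.linalg.norm(d) + 1e-8))
    return np.stack(descs)

def match(desc_a, desc_b):
    """Match each descriptor in view A to its nearest (cosine) in view B."""
    sim = desc_a @ desc_b.T
    return sim.argmax(axis=1)

rng = np.random.default_rng(2)
img_a = rng.random((32, 32))
img_b = img_a.copy()              # identical second "view" for this demo
kps = [(5, 5), (10, 20), (25, 12)]
idx = match(patch_descriptors(img_a, kps), patch_descriptors(img_b, kps))
```

With identical views, each keypoint should match itself, i.e. `idx` is the identity permutation.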

    GCN-Denoiser: Mesh Denoising with Graph Convolutional Networks

    In this paper, we present GCN-Denoiser, a novel feature-preserving mesh denoising method based on graph convolutional networks (GCNs). Unlike previous learning-based mesh denoising methods that exploit hand-crafted or voxel-based representations for feature learning, our method explores the structure of a triangular mesh itself and introduces a graph representation followed by graph convolution operations in the dual space of triangles. We show that such a graph representation naturally captures the geometric features while remaining lightweight for both training and inference. To facilitate effective feature learning, our network exploits both static and dynamic edge convolutions, which allow us to learn information from both the explicit mesh structure and potential implicit relations among unconnected neighbors. To better approximate an unknown noise function, we introduce a cascaded optimization paradigm that progressively regresses the noise-free facet normals with multiple GCNs. GCN-Denoiser achieves new state-of-the-art results on multiple noise datasets, including CAD models that often contain sharp features and raw scan models with real noise captured from different devices. We also release a new dataset, PrintData, containing 20 real scans with their corresponding ground-truth meshes for the research community. Our code and data are available at https://github.com/Jhonve/GCN-Denoiser.
    Comment: Accepted by ACM Transactions on Graphics 202
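The "dual space of triangles" the abstract refers to is the graph whose nodes are faces, with edges between faces that share a mesh edge. The sketch below builds that dual graph for a toy tetrahedron and runs one neighbor-averaging step on face normals, as a crude stand-in for the paper's learned graph convolutions; it is an illustration of the data structure, not the authors' method.

```python
import numpy as np

# Toy tetrahedron: 4 vertices, 4 triangular faces.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

def face_normals(verts, faces):
    """Unit normal of each triangle."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def dual_adjacency(faces):
    """Faces are adjacent in the dual graph iff they share an edge
    (exactly two common vertices)."""
    m = len(faces)
    adj = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(i + 1, m):
            if len(set(faces[i]) & set(faces[j])) == 2:
                adj[i, j] = adj[j, i] = True
    return adj

def smooth_normals(normals, adj):
    """One averaging step over the dual graph: a hand-written stand-in
    for a learned graph convolution on facet normals."""
    out = normals.copy()
    for i in range(len(normals)):
        avg = (normals[i] + normals[adj[i]].sum(axis=0)) / (1 + adj[i].sum())
        out[i] = avg / np.linalg.norm(avg)
    return out

normals = face_normals(verts, faces)
adj = dual_adjacency(faces)
smoothed = smooth_normals(normals, adj)
```

Every pair of tetrahedron faces shares an edge, so each of the 4 dual nodes has 3 neighbors.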

    First determination of Pu isotopes (239Pu, 240Pu and 241Pu) in radioactive particles derived from Fukushima Daiichi Nuclear Power Plant accident

    Radioactive particles were released into the environment during the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident. Many studies have been conducted to elucidate the chemical composition of the released radioactive particles in order to understand their formation process. However, whether radioactive particles contain nuclear fuel radionuclides remained to be investigated. Here, we report the first determination of Pu isotopes in radioactive particles. To determine the Pu isotopes (239Pu, 240Pu and 241Pu) in radioactive particles derived from the FDNPP accident, free from the influence of global fallout, radiochemical analysis and inductively coupled plasma-mass spectrometry measurements were conducted. Radioactive particles derived from unit 1 and from unit 2 or 3 were analyzed. For the radioactive particles derived from unit 1, the activities of 239+240Pu and 241Pu were (1.70-7.06)×10⁻⁵ Bq and (4.10-8.10)×10⁻³ Bq, respectively, and the atom ratios of 240Pu/239Pu and 241Pu/239Pu were 0.330-0.415 and 0.162-0.178, respectively. These ratios are consistent with simulation results from the ORIGEN code and with measurements from various environmental samples. In contrast, Pu was not detected in the radioactive particles derived from unit 2 or 3. The difference in Pu content is clear evidence of different formation processes for the radioactive particles, and the detailed formation processes can be investigated through Pu analysis.
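The abstract reports both activities (Bq) and atom ratios; the two are linked by the standard decay relation A = λN with λ = ln 2 / T½, so an atom ratio follows from an activity ratio and the half-lives. The sketch below shows that conversion with approximate standard half-lives; the activity values are hypothetical illustrations, not the paper's measurements.

```python
import math

# Approximate half-lives in years (standard nuclear data)
T_HALF = {"Pu239": 24110.0, "Pu240": 6561.0, "Pu241": 14.3}

def atoms_from_activity(activity_bq, half_life_years):
    """Atom count N from activity A via A = lambda * N,
    with lambda = ln(2) / T_half (converted to seconds)."""
    lam = math.log(2) / (half_life_years * 365.25 * 24 * 3600)
    return activity_bq / lam

def atom_ratio(a_num, t_num, a_den, t_den):
    """Atom ratio N_num / N_den from two activities and half-lives.
    The seconds conversion cancels, leaving (A_num/A_den)*(T_num/T_den)."""
    return atoms_from_activity(a_num, t_num) / atoms_from_activity(a_den, t_den)

# Hypothetical illustrative activities (Bq), not measured values:
a239, a241 = 3.0e-5, 5.0e-3
r = atom_ratio(a241, T_HALF["Pu241"], a239, T_HALF["Pu239"])
```

Because 241Pu is much shorter-lived than 239Pu, a large activity ratio corresponds to a much smaller atom ratio, consistent in spirit with the ranges the paper reports.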